Deep Residual Correction Network for Partial Domain Adaptation
Authors
Abstract
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain. Most existing works assume that source and target data share an identical label space, which is often difficult to satisfy in many real-world applications. With the emergence of big data, there is a more practical scenario called partial domain adaptation, where we always have access to a large-scale source domain while working on a relatively small-scale target domain. In this case, the conventional domain adaptation assumption should be relaxed, and the target label space tends to be a subset of the source label space. Intuitively, reinforcing the positive effects of the most relevant source subclasses and reducing the negative impacts of irrelevant source subclasses are of vital importance to address the partial domain adaptation challenge. This paper proposes an efficiently-implemented Deep Residual Correction Network (DRCN) by plugging one residual block into the source network along with the task-specific feature layer, which effectively enhances the adaptation from source to target and explicitly weakens the influence of irrelevant source classes. Specifically, the plugged residual block, which consists of several fully-connected layers, could deepen the basic network and boost its feature representation capability correspondingly. Moreover, we design a weighted class-wise domain alignment loss to couple the two domains by matching the feature distributions of shared classes between source and target. Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition tasks demonstrate that DRCN is superior to competitive deep domain adaptation approaches.
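The two ingredients of the abstract — a residual block of fully-connected layers appended after the feature layer, and per-class weights that down-weight source classes absent from the target — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation: the function names, layer sizes, and the max-normalization of the class weights are hypothetical choices; the paper's actual network, weighting scheme, and alignment loss are defined in its full text.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_correction_block(features, w1, b1, w2, b2):
    """Residual block of two fully-connected layers:
    output = features + FC2(ReLU(FC1(features))).
    The skip connection keeps the original task-specific features
    and lets the block learn only a correction on top of them."""
    hidden = relu(features @ w1 + b1)        # first FC layer + nonlinearity
    correction = hidden @ w2 + b2            # second FC layer, back to input size
    return features + correction             # residual (skip) connection

def class_weights_from_target(target_probs):
    """Hypothetical class-weighting scheme: average the classifier's
    softmax outputs over unlabeled target samples, so source classes
    the target rarely activates receive small weights in a subsequent
    class-wise alignment loss."""
    w = target_probs.mean(axis=0)
    return w / w.max()                       # most relevant class gets weight 1.0

# Demo with random features: 5 samples, 8-dim features, 4-dim bottleneck.
rng = np.random.default_rng(0)
d, h = 8, 4
x = rng.normal(size=(5, d))
w1, b1 = rng.normal(size=(d, h)), np.zeros(h)
w2, b2 = rng.normal(size=(h, d)), np.zeros(d)
out = residual_correction_block(x, w1, b1, w2, b2)
print(out.shape)  # (5, 8): same shape as the input features

# Two target samples, three classes: class 0 dominates the predictions.
target_probs = np.array([[0.7, 0.2, 0.1],
                         [0.8, 0.1, 0.1]])
weights = class_weights_from_target(target_probs)
print(weights)  # class 0 receives the largest weight
```

Because the block outputs features of the same shape as its input, it can be inserted into an existing network without changing the layers around it; the weights would then scale each class's term in a class-wise distribution-matching loss between source and target.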
Similar Resources
Residual Parameter Transfer for Deep Domain Adaptation
The goal of Deep Domain Adaptation is to make it possible to use Deep Nets trained in one domain where there is enough annotated training data in another where there is little or none. Most current approaches have focused on learning feature representations that are invariant to the changes that occur when going from one domain to the other, which means using the same network parameters in both...
Deep Transfer Network: Unsupervised Domain Adaptation
Domain adaptation aims at training a classifier in one dataset and applying it to a related but not identical dataset. One successfully used framework of domain adaptation is to learn a transformation to match both the distribution of the features (marginal distribution), and the distribution of the labels given features (conditional distribution). In this paper, we propose a new domain adaptat...
Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning
Domain adaptation is a powerful technique given a wide amount of labeled data from similar attributes in different domains. In real-world applications, there is a huge amount of data, but most of it is unlabeled. It is effective in image classification where it is expensive and time-consuming to obtain adequate label data. We propose a novel method named DALRRL, which consists of deep ...
Deep Hashing Network for Unsupervised Domain Adaptation Supplementary Material
1. Loss Function Derivative In this section we outline the derivative of Equation 8 for the backpropagation algorithm; min U J = L(Us) + γM(Us, Ut) + ηH(Us, Ut), (8) where, U := {Us ∪ Ut} and (γ, η) control the importance of domain adaptation (1) and target entropy loss (7) respectively. In the following subsections, we outline the derivative of the individual terms w.r.t. the input U. 1.1. Der...
Sample-oriented Domain Adaptation for Image Classification
Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it. The conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) that are used to learn the model have a different distribution with test images (target domain). Also, many real world applicat...
Journal
Journal Title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2021
ISSN: 1939-3539, 2160-9292, 0162-8828
DOI: https://doi.org/10.1109/tpami.2020.2964173